
    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information
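    As a concrete illustration of the linear/nonlinear picture described above, the sketch below estimates an empirical LN model by reverse correlation on a white-noise stimulus: the spike-triggered average recovers the linear filter, and binning spike probability against the filtered stimulus yields the gain curve. All parameters (filter shape, nonlinearity, spike rate) are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical LN-model estimation via reverse correlation (illustrative
# parameters; not the paper's simulations).
rng = np.random.default_rng(0)

n_t, n_lags = 200_000, 40
stimulus = rng.normal(0.0, 1.0, n_t)                  # white-noise drive

# "True" underlying system: one linear filter followed by a sigmoidal gain curve.
lags = np.arange(n_lags)
true_filter = np.exp(-lags / 8.0) * np.sin(lags / 4.0)
true_filter /= np.linalg.norm(true_filter)

drive = np.convolve(stimulus, true_filter, mode="full")[:n_t]
p_spike = 0.1 / (1.0 + np.exp(-(drive - 1.0) / 0.3))  # gain curve, max 0.1 per bin
spikes = rng.random(n_t) < p_spike

# Reverse correlation: for Gaussian white noise, the spike-triggered average
# recovers the linear filter up to a scale factor.
spike_idx = np.flatnonzero(spikes)
spike_idx = spike_idx[spike_idx >= n_lags]
sta = np.mean([stimulus[i - n_lags + 1:i + 1][::-1] for i in spike_idx], axis=0)

# Empirical gain curve: spike probability as a function of the filtered stimulus.
proj = np.convolve(stimulus, sta / np.linalg.norm(sta), mode="full")[:n_t]
bins = np.linspace(proj.min(), proj.max(), 25)
which = np.digitize(proj, bins)
gain_curve = [spikes[which == b].mean() if np.any(which == b) else np.nan
              for b in range(1, len(bins))]
```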

    A habituation account of change detection in same/different judgments

    We investigated the basis of change detection in a short-term priming task. In two experiments, participants were asked to indicate whether or not a target word was the same as a previously presented cue. Data from a magnetoencephalography experiment failed to reveal different patterns for “same” and “different” responses, consistent with the claim that both arise from a common neural source, with response magnitude defining the difference between immediate novelty and familiarity. In a behavioral experiment, we tested and confirmed the predictions of a habituation account of these judgments by comparing conditions in which the target, the cue, or neither was primed by its presentation in the previous trial. As predicted, cue-primed trials had faster response times, and target-primed trials had slower response times, relative to the neither-primed baseline. These results were obtained irrespective of response repetition and stimulus–response contingencies. The behavioral and brain activity data support the view that detection of change drives performance in these tasks and that the underlying mechanism is neuronal habituation.

    Transient Responses to Rapid Changes in Mean and Variance in Spiking Models

    The mean and variance of the total synaptic input to a neuron can vary independently, suggesting two distinct information channels. Here we examine the impact of rapidly varying signals, delivered via these two information conduits, on the temporal dynamics of neuronal firing rate responses. We examine the responses of model neurons to step functions in either the mean or the variance of the input current. Our results show that the temporal dynamics governing response onset depend on the choice of model. Specifically, the existence of a hard threshold introduces an instantaneous component into the response onset of a leaky integrate-and-fire model that is not present in the other models studied here. Other response features, for example a decaying oscillatory approach to a new steady-state firing rate, appear to be more universal among neuronal models. The decay time constant of this approach is a power-law function of noise magnitude over a wide range of input parameters. Understanding how specific model properties underlie these response features is important for understanding how neurons will respond to rapidly varying signals, as the temporal dynamics of the response onset and of the decay to a new steady state determine what range of signal frequencies a population of neurons can respond to and faithfully encode.
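    A minimal simulation in the spirit of the step-change protocol described above: a population of leaky integrate-and-fire neurons is driven by noisy current whose mean or standard deviation steps at a fixed time, and the population firing rate is tracked around the step. Model parameters and step sizes are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical LIF population responding to a step in the mean or the standard
# deviation of its input current (all parameters are illustrative).
rng = np.random.default_rng(1)

dt, t_max, n_neurons = 1e-4, 0.4, 5000                 # s, s, independent trials
tau_m, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0

n_steps = int(t_max / dt)
time = np.arange(n_steps) * dt
step_on = time >= 0.2                                   # step at t = 200 ms

def population_rate(mu0, mu1, sigma0, sigma1):
    """Population firing rate (Hz) for a step mu0 -> mu1 and sigma0 -> sigma1."""
    v = np.full(n_neurons, v_rest)
    rate = np.zeros(n_steps)
    for i in range(n_steps):
        mu = mu1 if step_on[i] else mu0
        sigma = sigma1 if step_on[i] else sigma0
        noise = sigma * np.sqrt(dt / tau_m) * rng.normal(size=n_neurons)
        v += dt / tau_m * (-(v - v_rest) + mu) + noise
        fired = v >= v_thresh                           # hard threshold
        rate[i] = fired.mean() / dt
        v[fired] = v_reset
    return rate

rate_mean_step = population_rate(mu0=0.8, mu1=1.2, sigma0=0.3, sigma1=0.3)
rate_var_step = population_rate(mu0=0.8, mu1=0.8, sigma0=0.2, sigma1=0.5)
```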

    Shunting Inhibition Controls the Gain Modulation Mediated by Asynchronous Neurotransmitter Release in Early Development

    The sensitivity of a neuron to its input can be modulated in several ways. Changes in the slope of the neuronal input-output curve depend on factors such as shunting inhibition, background noise, frequency-dependent synaptic excitation, and balanced excitation and inhibition. However, in early development GABAergic interneurons are excitatory and other mechanisms such as asynchronous transmitter release might contribute to regulating neuronal sensitivity. We modeled both phasic and asynchronous synaptic transmission in early development to study the impact of activity-dependent noise and short-term plasticity on the synaptic gain. Asynchronous release decreased or increased the gain depending on the membrane conductance. In the high shunt regime, excitatory input due to asynchronous release was divisive, whereas in the low shunt regime it had a nearly multiplicative effect on the firing rate. In addition, sensitivity to correlated inputs was influenced by shunting and asynchronous release in opposite ways. Thus, asynchronous release can regulate the information flow at synapses and its impact can be flexibly modulated by the membrane conductance
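    The sketch below probes the same qualitative question with a generic conductance-based integrate-and-fire neuron, not the authors' model: an input-output curve is measured under a low-shunt and a high-shunt regime while a rectified, fluctuating "asynchronous-release-like" conductance is added to the phasic drive. All conductance values are illustrative assumptions.

```python
import numpy as np

# Hypothetical conductance-based integrate-and-fire neuron: input-output gain
# under low vs high shunting, with a rectified fluctuating conductance standing
# in for asynchronous release (all values are illustrative assumptions).
rng = np.random.default_rng(2)

dt, t_max = 1e-4, 2.0                                   # s
c_m, g_leak, e_leak = 1.0, 0.05, -70.0                  # nF, uS, mV
e_exc, v_thresh, v_reset = 0.0, -50.0, -65.0            # mV

def firing_rate(g_drive, g_shunt, g_async_sd):
    """Firing rate (Hz) for tonic excitation g_drive plus a shunting
    conductance and noisy 'asynchronous' excitation."""
    v, spikes = e_leak, 0
    for _ in range(int(t_max / dt)):
        g_async = max(0.0, rng.normal(0.0, g_async_sd))       # rectified noise
        i_total = (g_leak * (e_leak - v)
                   + g_shunt * (e_leak - v)                    # shunt reverses near rest
                   + (g_drive + g_async) * (e_exc - v))        # nA
        v += (dt * 1e3) * i_total / c_m                        # mV (dt converted to ms)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes / t_max

drives = np.linspace(0.0, 0.05, 6)                             # uS
f_low_shunt = [firing_rate(g, g_shunt=0.01, g_async_sd=0.005) for g in drives]
f_high_shunt = [firing_rate(g, g_shunt=0.10, g_async_sd=0.005) for g in drives]
```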

    Modelling fast forms of visual neural plasticity using a modified second-order motion energy model

    The Adelson-Bergen motion energy sensor is well established as the leading model of low-level visual motion sensing in human vision. However, the standard model cannot predict adaptation effects in motion perception. A previous paper by Pavan et al. (Journal of Vision 10:1-17, 2013) presented an extension to the model which uses a first-order RC gain-control circuit (leaky integrator) to implement adaptation effects which can span many seconds, and showed that the extended model's output is consistent with psychophysical data on the classic motion after-effect. Recent psychophysical research has reported adaptation over much shorter time periods, spanning just a few hundred milliseconds. The present paper further extends the sensor model to implement rapid adaptation, by adding a second-order RC circuit which causes the sensor to require a finite amount of time to react to a sudden change in stimulation. The output of the new sensor accounts accurately for psychophysical data on rapid forms of facilitation (rapid visual motion priming, rVMP) and suppression (rapid motion after-effect, rMAE). Changes in natural scene content occur over multiple time scales, and multi-stage leaky integrators of the kind proposed here offer a computational scheme for modelling adaptation over multiple time scales. © 2014 Springer Science+Business Media New York
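    A minimal sketch of the leaky-integrator building block referred to above: a first-order RC low-pass filter implemented as a discrete-time recursion, a cascaded (second-order) version that takes a finite time to react to a step, and one common way a slow adaptive state can act divisively on the sensor output. Time constants and the divisive-gain form are assumptions for illustration, not the fitted model.

```python
import numpy as np

# Hypothetical leaky-integrator stages (time constants and gain-control form
# are assumptions).
def leaky_integrator(signal, tau, dt):
    """First-order RC low-pass: tau * dy/dt = x - y, integrated with Euler."""
    y = np.zeros_like(signal)
    for i in range(1, len(signal)):
        y[i] = y[i - 1] + dt / tau * (signal[i] - y[i - 1])
    return y

dt = 0.001                                    # 1 ms steps
t = np.arange(0.0, 2.0, dt)
drive = (t >= 0.5).astype(float)              # sensor output steps on at 500 ms

slow = leaky_integrator(drive, tau=2.0, dt=dt)          # slow adaptive state
# Cascading two fast stages gives a second-order response that needs a finite
# time to react to the sudden step in the drive.
fast = leaky_integrator(leaky_integrator(drive, tau=0.05, dt=dt), tau=0.05, dt=dt)
# One simple way the slow state can act on the sensor: divisive gain control.
adapted = drive / (1.0 + slow)
```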

    Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex

    Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g., for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinic (e.g., for improving invasive Brain-Machine Interface devices). However, the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of the neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First, we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of simple nonlinear equations that maximized the match between models and true neurons in terms of spike timings and peri-stimulus time histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron-intrinsic properties (i.e., on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role.
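    As a much simpler stand-in for the paper's multi-objective-optimized nonlinear model, the sketch below predicts per-bin spike occurrence from a window of preceding LFP samples using plain logistic regression on synthetic data. The LFP trace, spike statistics and window length are all assumed for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical spike prediction from preceding LFP samples on synthetic data
# (a stand-in for the paper's optimized nonlinear model, not a reproduction).
rng = np.random.default_rng(3)

n_bins, window = 20_000, 50                    # 1 ms bins, 50 ms of LFP history
lfp = rng.normal(size=n_bins).cumsum() * 0.01  # slow, LFP-like fluctuation
lfp -= lfp.mean()

# Synthetic ground truth: spike probability rises with the local LFP value.
p_spike = 0.02 + 0.08 / (1.0 + np.exp(-5.0 * lfp))
spikes = rng.random(n_bins) < p_spike

# Design matrix: each row holds the `window` LFP samples preceding one bin.
X = np.stack([lfp[i - window:i] for i in range(window, n_bins)])
y = spikes[window:]

model = LogisticRegression(max_iter=1000).fit(X, y)
predicted_p = model.predict_proba(X)[:, 1]     # per-bin spike probability
```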

    Contrast and Phase Combination in Binocular Vision

    BACKGROUND: How the visual system combines information from the two eyes to form a unitary binocular representation of the external world is a fundamental question in vision science that has been the focus of many psychophysical and physiological investigations. Ding & Sperling (2006) measured the perceived phase of the cyclopean image, and developed a binocular combination model in which each eye exerts gain control on the other eye's signal and over the other eye's gain control. Critically, the relative phase of the monocular sine-waves plays a central role. METHODOLOGY/PRINCIPAL FINDINGS: We used the Ding-Sperling paradigm but measured both the perceived contrast and phase of cyclopean images in three hundred and eighty combinations of base contrast, interocular contrast ratio, eye origin of the probe, and interocular phase difference. We found that the perceived contrast of the cyclopean image was independent of the relative phase of the two monocular gratings, although the perceived phase depended on the relative phase and contrast ratio of the monocular images. We developed a new multi-pathway contrast-gain control model (MCM) that elaborates the Ding-Sperling binocular combination model in two ways: (1) phase and contrast of the cyclopean images are computed in separate pathways, although with shared cross-eye contrast-gain control; and (2) phase-independent local energy from the two monocular images is used in binocular contrast combination. With three free parameters, the model yielded an excellent account of data from all the experimental conditions. CONCLUSIONS/SIGNIFICANCE: Binocular phase combination depends on the relative phase and contrast ratio of the monocular images, but binocular contrast combination is phase-invariant. Our findings suggest the involvement of at least two separate pathways in binocular combination.
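    A simplified stand-in (not the authors' MCM) for the kind of computation described above: two monocular gratings are combined as phasors, with each eye's weight set by an assumed cross-eye contrast-gain-control rule, and the contrast and phase of the cyclopean grating are read off the summed phasor.

```python
import numpy as np

# Hypothetical binocular combination: phasor sum of two monocular gratings with
# an assumed cross-eye contrast-gain-control weighting (not the fitted MCM).
def cyclopean(c_left, c_right, phase_diff_deg, gamma=1.0):
    """Return (contrast, phase in degrees) of the combined cyclopean grating."""
    theta = np.deg2rad(phase_diff_deg)
    # Assumed gain-control rule: each eye's weight grows with its own contrast
    # energy and is suppressed by the other eye's.
    e_l, e_r = c_left ** gamma, c_right ** gamma
    w_l = (1.0 + e_l) / (1.0 + e_l + e_r)
    w_r = (1.0 + e_r) / (1.0 + e_l + e_r)
    # The monocular gratings carry phases +theta/2 and -theta/2; adding them as
    # complex phasors gives the amplitude and phase of the cyclopean sinusoid.
    z = (w_l * c_left * np.exp(1j * theta / 2.0)
         + w_r * c_right * np.exp(-1j * theta / 2.0))
    return np.abs(z), np.rad2deg(np.angle(z))

contrast, phase = cyclopean(c_left=0.4, c_right=0.2, phase_diff_deg=90.0)
```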

    A Multi-Stage Model for Fundamental Functional Properties in Primary Visual Cortex

    Many neurons in mammalian primary visual cortex have properties, such as sharp tuning for contour orientation, strong selectivity for motion direction, and insensitivity to stimulus polarity, that are not shared with their sub-cortical counterparts. Successful models have been developed for a number of these properties, but in one case, direction selectivity, there is no consensus about underlying mechanisms. Here we define a model that accounts for many of the empirical observations concerning direction selectivity. The model describes a single column of cat primary visual cortex and comprises a series of processing stages. Each neuron in the first cortical stage receives input from a small number of on-centre and off-centre relay cells in the lateral geniculate nucleus. Consistent with recent physiological evidence, the off-centre inputs to cortex precede the on-centre inputs by a small (∼4 ms) interval, and it is this difference that confers direction selectivity on model neurons. We show that the resulting model successfully matches the following empirical data: the proportion of cells that are direction selective; tilted spatiotemporal receptive fields; phase advance in the response to a stationary contrast-reversing grating stepped across the receptive field. The model also accounts for several other fundamental properties. Receptive fields have elongated subregions, orientation selectivity is strong, and the distribution of orientation tuning bandwidth across neurons is similar to that seen in the laboratory. Finally, neurons in the first stage have properties corresponding to simple cells, and more complex-like cells emerge in later stages. The results therefore show that a simple feed-forward model can account for a number of the fundamental properties of primary visual cortex.
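    The sketch below illustrates the timing idea at the heart of the model: an off-centre subunit that leads an on-centre subunit by ~4 ms is summed into a single spatiotemporal filter, and the filter's responses to gratings drifting in opposite directions are compared. Kernel shapes, spatial offsets and grating parameters are illustrative assumptions rather than the published model.

```python
import numpy as np

# Hypothetical two-subunit spatiotemporal filter: the off subunit leads the on
# subunit by ~4 ms (kernels, offsets and grating parameters are illustrative).
dt, dx = 1e-3, 0.1                            # 1 ms and 0.1 deg sampling
t = np.arange(0.0, 0.1, dt)                   # 100 ms of filter
x = np.arange(-2.0, 2.0, dx)                  # 4 deg of space

def temporal_kernel(latency_s):
    """Biphasic LGN-like temporal kernel starting at the given latency."""
    ts = np.clip(t - latency_s, 0.0, None)
    return (ts / 0.015) * np.exp(-ts / 0.015) - 0.5 * (ts / 0.030) * np.exp(-ts / 0.030)

def spatial_kernel(center_deg, sign):
    return sign * np.exp(-((x - center_deg) ** 2) / (2 * 0.3 ** 2))

rf = (np.outer(temporal_kernel(0.000), spatial_kernel(-0.4, -1.0))     # off leads
      + np.outer(temporal_kernel(0.004), spatial_kernel(+0.4, +1.0)))  # on lags

def response_amplitude(direction):
    """Linear response amplitude to a grating drifting in the given direction
    (modulus of the filter's projection onto a complex drifting grating)."""
    sf, tf = 0.8, 4.0                         # cyc/deg, Hz
    tt, xx = np.meshgrid(t, x, indexing="ij")
    grating = np.exp(2j * np.pi * (sf * xx - direction * tf * tt))
    return np.abs(np.sum(rf * grating))

r_right, r_left = response_amplitude(+1), response_amplitude(-1)
direction_index = abs(r_right - r_left) / (r_right + r_left + 1e-9)
```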

    A Normalization Model of Attentional Modulation of Single Unit Responses

    Although many studies have shown that attention to a stimulus can enhance the responses of individual cortical sensory neurons, little is known about how attention accomplishes this change in response. Here, we propose that attention-based changes in neuronal responses depend on the same response normalization mechanism that adjusts sensory responses whenever multiple stimuli are present. We have implemented a model of attention that assumes that attention works only through this normalization mechanism, and show that it can replicate key effects of attention. The model successfully explains how attention changes the gain of responses to individual stimuli and also why modulation by attention is more robust and not a simple gain change when multiple stimuli are present inside a neuron's receptive field. Additionally, the model accounts well for physiological data that measure separately attentional modulation and sensory normalization of the responses of individual neurons in area MT in visual cortex. The proposal that attention works through a normalization mechanism sheds new light on a broad range of observations on how attention alters the representation of sensory information in cerebral cortex.
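    A minimal sketch of a normalization computation of the kind the model relies on: the stimulus drive is multiplied by an attention field and divided by a suppressive drive pooled over space and orientation plus a constant. Array sizes, the attention gain and the pooling rule are assumptions for illustration.

```python
import numpy as np

# Hypothetical normalization-model computation (sizes, attention gain and
# pooling rule are illustrative assumptions).
n_space, n_orient = 64, 32
sigma = 0.1                                         # semi-saturation constant

# Stimulus drive: two stimuli at different positions and orientations.
stimulus_drive = np.zeros((n_space, n_orient))
stimulus_drive[16, 8] = 1.0
stimulus_drive[48, 24] = 1.0

# Attention field: extra gain around the attended position, 1 elsewhere.
attention_field = np.ones_like(stimulus_drive)
attention_field[12:20, :] *= 2.0                    # attend near position 16

excitatory_drive = attention_field * stimulus_drive

# Suppressive drive: the excitatory drive pooled broadly over space and
# orientation (a global mean stands in for the pooling kernel here).
suppressive_drive = np.full_like(excitatory_drive, excitatory_drive.mean())

# Divisive normalization: attended stimuli gain relative to unattended ones,
# and the size of the gain change depends on what feeds the suppressive pool.
response = excitatory_drive / (suppressive_drive + sigma)
```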

    Non-Linear Neuronal Responses as an Emergent Property of Afferent Networks: A Case Study of the Locust Lobula Giant Movement Detector

    In principle, it appears advantageous for single neurons to perform non-linear operations. Indeed it has been reported that some neurons show signatures of such operations in their electrophysiological response. A particular case in point is the Lobula Giant Movement Detector (LGMD) neuron of the locust, which is reported to locally perform a functional multiplication. Given the wide ramifications of this suggestion with respect to our understanding of neuronal computations, it is essential that this interpretation of the LGMD as a local multiplication unit be thoroughly tested. Here we evaluate an alternative model that tests the hypothesis that the non-linear responses of the LGMD neuron emerge from the interactions of many neurons in the opto-motor processing structure of the locust. We show, by exposing our model to standard LGMD stimulation protocols, that the properties of the LGMD that were seen as a hallmark of local non-linear operations can be explained as emerging from the dynamics of the pre-synaptic network. Moreover, we demonstrate that these properties strongly depend on the details of the synaptic projections from the medulla to the LGMD. From these observations we deduce a number of testable predictions. To assess the real-time properties of our model, we applied it to a high-speed robot. These robot results show that our model of the locust opto-motor system is able to reliably stabilize the movement trajectory of the robot and can robustly support collision avoidance. In addition, these behavioural experiments suggest that the emergent non-linear responses of the LGMD neuron enhance the system's collision detection acuity. We show how all reported properties of this neuron are consistently reproduced by this alternative model, and how they emerge from the overall opto-motor processing structure of the locust. Hence, our results propose an alternative view on neuronal computation that emphasizes the network properties as opposed to the local transformations that can be performed by single neurons.
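    For reference, the sketch below reproduces the phenomenological "eta" description of LGMD looming responses, in which firing tracks a delayed product of angular edge velocity and a negative exponential of angular size; whether this multiplication is computed locally or emerges from the presynaptic network is precisely what the paper tests. Stimulus and parameter values are assumed for illustration.

```python
import numpy as np

# Hypothetical looming stimulus and the descriptive eta(t) response
# (parameter values are illustrative assumptions).
l_over_v = 0.03                       # object half-size / approach speed (s)
alpha, delta, c = 3.0, 0.025, 1.0     # size sensitivity, delay (s), scale

dt = 0.001
t = np.arange(-1.0, -0.005, dt)                       # time to collision (s)
theta = 2.0 * np.arctan(l_over_v / -t)                # angular size of the object
theta_dot = np.gradient(theta, t)                     # angular expansion velocity

# Delay both signals by delta, then multiply velocity by exp(-alpha * size):
# the response peaks a fixed delay after the object reaches a threshold size.
lag = int(delta / dt)
theta_d = np.concatenate([np.full(lag, theta[0]), theta[:-lag]])
theta_dot_d = np.concatenate([np.zeros(lag), theta_dot[:-lag]])
eta = c * theta_dot_d * np.exp(-alpha * theta_d)
```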